2022 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, CIBCB 2022; 2022.
Article in English | Scopus | ID: covidwho-2051946

ABSTRACT

Machine Learning (ML) models play an important role in healthcare thanks to their remarkable performance in predicting complex phenomena. During the COVID-19 pandemic, various ML models were deployed to support decisions in medical settings. However, clinical experts need to ensure that these models are valid, provide clinically useful information, and are implemented and used correctly. To that end, they need to understand the logic behind the models in order to trust them. Hence, developing transparent and interpretable models is of increasing relevance. In this work, we applied four interpretable ML models, namely logistic regression, decision trees, pyFUME, and RIPPER, to classify suspected COVID-19 patients based on clinical data collected from blood samples. After preprocessing the data set and training the models, we evaluated the models based on their predictive performance. We then illustrate that interpretability can be achieved in different ways. First, SHAP explanations are built on top of the logistic regression and decision tree models to obtain feature importances. Next, the inherent interpretability of pyFUME and RIPPER is demonstrated. Finally, potential ways to achieve trust in future studies are briefly discussed. © 2022 IEEE.
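The workflow described in the abstract (train an interpretable classifier on blood-sample features, then inspect it to rank feature importance) can be sketched as follows. This is a minimal, self-contained illustration, not the authors' code: the data are synthetic, the feature names (CRP, lymphocyte count, LDH) are hypothetical stand-ins for blood markers, and coefficient magnitudes are used as a simple importance proxy in place of the SHAP explanations used in the paper.

```python
import math
import random

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain-Python logistic regression trained with stochastic gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(-30.0, min(30.0, z))          # clamp for numerical safety
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
            err = p - yi                          # gradient of log-loss w.r.t. z
            for j in range(n_feat):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

# Synthetic, standardized "blood marker" data (hypothetical feature names).
random.seed(0)
features = ["CRP", "lymphocyte_count", "LDH"]
X, y = [], []
for _ in range(200):
    crp, lymph, ldh = (random.gauss(0, 1) for _ in range(3))
    # Assumed ground truth: high CRP and LDH, low lymphocytes -> positive class.
    label = 1 if crp + ldh - lymph + random.gauss(0, 0.5) > 0 else 0
    X.append([crp, lymph, ldh])
    y.append(label)

w, b = train_logreg(X, y)
# Because the inputs are standardized, absolute coefficient size serves as a
# crude feature-importance score (the paper instead derives SHAP values).
importance = sorted(zip(features, (abs(c) for c in w)), key=lambda t: -t[1])
for name, score in importance:
    print(f"{name}: {score:.2f}")
```

In practice one would fit a library model (e.g. a scikit-learn estimator) and pass it to a SHAP explainer, which attributes each prediction to individual features; the coefficient-based ranking above is only the simplest interpretable baseline.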
